61 research outputs found

    Motion Extrapolation in the Central Fovea

    Neural transmission latency would introduce a spatial lag when an object moves across the visual field if the latency were not compensated. A visual predictive mechanism has been proposed that overcomes this spatial lag by extrapolating the position of the moving object forward. However, a forward position shift is often absent if the object abruptly stops moving (motion termination). A recent “correction-for-extrapolation” hypothesis suggests that the absence of forward shifts is caused by sensory signals representing ‘failed’ predictions. Thus far, this hypothesis has been tested only for extra-foveal retinal locations. We tested it using two foveal scotomas: the scotoma to dim light and the scotoma to blue light. We found that the perceived position of a dim dot is extrapolated into the fovea at motion termination. Next, we compared the perceived position shifts of a blue versus a green moving dot. As predicted, extrapolation at motion termination was found only with the blue moving dot. The results provide new evidence for the correction-for-extrapolation hypothesis in the region with the highest spatial acuity, the fovea.

    Variation in the “coefficient of variation”

    The coefficient of variation (CV), also known as the relative standard deviation, has been used to measure the constancy of the Weber fraction, a key signature of efficient neural coding in time perception. It has long been debated whether duration judgments follow Weber's law, with arguments based on examinations of the CV. What has been largely ignored in this debate, however, is that observed CVs may be modulated by temporal context and decision uncertainty, calling into question conclusions based on this measure. Here, we used a temporal reproduction paradigm to examine the variation of the CV under two types of temporal context: full-range mixed vs. sub-range blocked intervals, separately for intervals presented in the visual and auditory modalities. We found a strong contextual modulation of both interval-duration reproductions and the observed CVs. We then applied a two-stage Bayesian model to predict those variations. Without assuming a violation of the constancy of the Weber fraction, our model successfully predicted both the central-tendency effect and the variation in the CV. Our findings and modeling results indicate that both the accuracy and precision of timing behavior depend strongly on temporal context and decision uncertainty. Critically, they advise caution in using variations of the CV to reject the constancy of the Weber fraction in duration estimation.
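The key point, that a constant Weber fraction can coexist with context-dependent observed CVs, can be illustrated with a minimal Bayesian-observer simulation. This is a sketch, not the authors' fitted two-stage model; the Weber fraction, prior widths, and the 0.8 s test duration are illustrative assumptions.

```python
import random
import statistics

def simulate_observed_cv(duration, wf, prior_mean, prior_sd, n=20000, seed=1):
    """Observed CV of Bayesian duration reproductions.

    The sensory measurement has a constant Weber fraction `wf`
    (sigma_m = wf * duration); the estimate shrinks toward the prior
    mean with the usual reliability weighting. All values are
    illustrative, not fitted parameters.
    """
    rng = random.Random(seed)
    sigma_m = wf * duration
    w = prior_sd**2 / (prior_sd**2 + sigma_m**2)  # weight on the measurement
    reps = [w * rng.gauss(duration, sigma_m) + (1 - w) * prior_mean
            for _ in range(n)]
    return statistics.stdev(reps) / statistics.mean(reps)

# Same 0.8 s stimulus, same Weber fraction; only the context (prior width) differs.
cv_wide = simulate_observed_cv(0.8, wf=0.15, prior_mean=0.8, prior_sd=0.4)    # mixed, full range
cv_narrow = simulate_observed_cv(0.8, wf=0.15, prior_mean=0.8, prior_sd=0.1)  # blocked sub-range
assert cv_narrow < cv_wide  # narrower context shrinks estimates more, lowering the observed CV
```

A narrower (blocked) context yields a lower observed CV even though the underlying Weber fraction never changes, which is why variations in the CV alone cannot refute Weber's law.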

    Multisensory visuo-tactile context learning enhances the guidance of unisensory visual search

    Does multisensory distractor-target context learning enhance visual search over and above unisensory learning? To address this question, we had participants perform a visual search task under both uni- and multisensory conditions. Search arrays consisted of one Gabor target that differed in orientation from three homogeneous distractors; participants had to discriminate the target's orientation. In the multisensory session, additional tactile (vibration-pattern) stimulation was delivered to two fingers of each hand, with the odd-one-out tactile target and the tactile distractors co-located with the corresponding visual items in half the trials; the other half presented the visual array only. In both sessions, the visual target was embedded within identical (repeated) spatial arrangements of distractors in half of the trials. The results revealed faster response times to targets in repeated versus non-repeated arrays, evidencing 'contextual cueing'. This effect was enhanced in the multisensory session, importantly, even when the visual arrays were presented without concurrent tactile stimulation. Drift-diffusion modeling confirmed that contextual cueing both increased the rate at which task-relevant information was accumulated and decreased the amount of evidence required for a response decision. Importantly, multisensory learning selectively enhanced the evidence-accumulation rate, expediting target detection even when the context memories were triggered by visual stimuli alone.
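The drift-diffusion logic, that a higher evidence-accumulation rate and a lower decision boundary both shorten response times, can be sketched with a plain random-walk simulation. The parameter values (drift, boundary, noise) are illustrative assumptions, not the fitted values from the study.

```python
import random

def ddm_mean_rt(drift, boundary, dt=0.001, noise=1.0, n=2000, seed=2):
    """Mean decision time of a simple two-boundary drift-diffusion process.

    Evidence starts at 0 and accumulates with rate `drift` plus Gaussian
    noise until it crosses +boundary or -boundary. Values are
    illustrative, not the study's fitted parameters.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            x += drift * dt + noise * rng.gauss(0.0, dt ** 0.5)
            t += dt
        total += t
    return total / n

rt_baseline = ddm_mean_rt(drift=1.5, boundary=1.0)        # non-repeated context
rt_cued = ddm_mean_rt(drift=2.5, boundary=1.0)            # higher accumulation rate
rt_cued_low_bound = ddm_mean_rt(drift=2.5, boundary=0.8)  # plus a lowered boundary
assert rt_cued_low_bound < rt_cued < rt_baseline  # both changes speed responses
```

On this account, the multisensory advantage corresponds to a further increase of the drift parameter alone, with the boundary left unchanged.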

    Learning to suppress likely distractor locations in visual search is driven by the local distractor frequency

    Salient but task-irrelevant distractors interfere less with visual search when they appear in a display region where distractors have appeared more frequently in the past (‘distractor-location probability cueing’). This effect could reflect the (re-)distribution of a global, limited attentional ‘inhibition resource’. Accordingly, changing the frequency of distractor appearance in one display region should also affect the magnitude of interference generated by distractors in a different region. Alternatively, distractor-location learning may reflect a local response (e.g., ‘habituation’) to distractors occurring at a particular location. In this case, the local distractor frequency in one display region should not affect distractor interference in a different region. To decide between these alternatives, we conducted three experiments in which participants searched for an orientation-defined target while ignoring a more salient orientation distractor that occurred more often in one vs. another display region. Experiment 1 varied the ratio of distractors appearing in the frequent vs. rare regions (from 60/40 to 90/10), with a fixed global distractor frequency. The results revealed that the cueing effect increased with increasing probability ratio. In Experiments 2 and 3, one (‘test’) region was assigned the same local distractor frequency as in one of the conditions of Experiment 1, but a different frequency in the other region, dissociating local from global distractor frequency. Together, the three experiments showed that distractor interference in the test region was not significantly influenced by the frequency in the other region, consistent with purely local learning. We discuss the implications for theories of statistical distractor-location learning.

    Temporal bisection is influenced by ensemble statistics of the stimulus set

    Although humans are quite capable of precise time measurement, their duration judgments are nevertheless susceptible to temporal context. Previous research on temporal bisection has shown that duration comparisons are influenced by both stimulus spacing and ensemble statistics. However, theories proposed to account for bisection performance lack a plausible justification of how the effects of stimulus spacing and ensemble statistics are actually combined in temporal judgments. To explain the various contextual effects in temporal bisection, we developed a unified ensemble-distribution account (EDA), which assumes that the mean and variance of the duration set, rather than the short and long standards, serve as the reference in duration comparison. To validate this account, we conducted three experiments that varied the stimulus spacing (Experiment 1), the frequency of the probed durations (Experiment 2), and the variability of the probed durations (Experiment 3). The results revealed significant shifts of the bisection point in Experiments 1 and 2, and a change in the sensitivity of temporal judgments in Experiment 3, all of which were well predicted by EDA. Indeed, comparing EDA with extant prior accounts showed that using ensemble statistics can parsimoniously explain the various stimulus-set-related factors (e.g., spacing, frequency, variance) that influence temporal judgments.
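The central EDA claim, that the reference for 'short' vs. 'long' judgments is the ensemble mean of the stimulus set rather than the fixed standards, can be sketched as follows. The interval values and noise level are illustrative assumptions, not the experimental stimulus sets.

```python
import random
import statistics

def prop_long(probe, ensemble, sigma=0.05, n=10000, seed=3):
    """Proportion of 'long' responses when a noisy measurement of the
    probe is compared against the ensemble mean (the EDA-style
    reference). `sigma` is an illustrative sensory-noise level."""
    rng = random.Random(seed)
    ref = statistics.mean(ensemble)
    return sum(rng.gauss(probe, sigma) > ref for _ in range(n)) / n

linear_set = [0.4, 0.6, 0.8, 1.0, 1.2]  # evenly spaced, mean 0.8 s
skewed_set = [0.4, 0.5, 0.6, 0.8, 1.2]  # same endpoints, mean 0.7 s

# The same 0.8 s probe is judged 'long' more often under the skewed
# spacing: the bisection point shifts with the ensemble mean, not with
# the (unchanged) short and long standards.
assert prop_long(0.8, skewed_set) > prop_long(0.8, linear_set)
```

In this scheme, increasing the variance of the duration set adds variability to the learned reference, which lowers the sensitivity (slope) of the psychometric function without shifting the bisection point.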

    Cross-modal contextual memory guides selective attention in visual-search tasks

    Visual search is speeded when a target item is positioned consistently within an invariant (repeatedly encountered) configuration of distractor items (contextual cueing). Contextual cueing is also observed in cross-modal search, when the location of the (visual) target is predicted by distractors from another (tactile) sensory modality. Previous studies examining lateralized event-related potential (ERP) waveforms with millisecond precision have shown that learned visual contexts improve a whole cascade of search-processing stages. Drawing on ERPs, the present study tested alternative accounts of contextual cueing in tasks in which distractor-target contextual associations are established across, as compared to within, sensory modalities. To this end, we devised a novel cross-modal search task: search for a visual feature singleton, with repeated (and non-repeated) distractor configurations presented either within the same (visual) or a different (tactile) modality. We found reaction times (RTs) to be faster for repeated versus non-repeated configurations, with comparable facilitation effects between visual (unimodal) and tactile (cross-modal) context cues. Further, for repeated configurations, there were enhanced amplitudes (and reduced latencies) of the ERP components indexing attentional allocation (PCN) and post-selective analysis of the target (CDA), respectively; both components correlated positively with the RT facilitation. These effects were again comparable between uni- and cross-modal cueing conditions. In contrast, motor-related processes indexed by the response-locked LRP contributed little to the RT effects. These results indicate that both uni- and cross-modal context cues benefit the same visual processing stages related to the selection and subsequent analysis of the search target.

    Influences of luminance contrast and ambient lighting on visual context learning and retrieval

    Invariant spatial context can guide attention and facilitate visual search, an effect referred to as “contextual cueing.” Most previous studies on contextual cueing were conducted under conditions of photopic vision and high search-item-to-background luminance contrast, leaving open the question of whether the learning and/or retrieval of context cues depends on luminance contrast and ambient lighting. Given this, we conducted three experiments (each comprising two subexperiments) to compare contextual cueing under different combinations of luminance contrast (high/low) and ambient lighting (photopic/mesopic). With high-contrast displays, we found robust contextual cueing in both photopic and mesopic environments, but the acquired cueing did not transfer when the display contrast changed from high to low in the photopic environment. By contrast, with low-contrast displays, contextual facilitation manifested only in mesopic vision, and the acquired cues remained effective following a switch to high-contrast displays. This pattern suggests that, with low display contrast, contextual cueing benefited from a more global search mode, aided by the activation of the peripheral rod system in mesopic vision, but was impeded by a more local, fovea-centered search mode in photopic vision.

    Duration reproduction under memory pressure: Modeling the roles of visual memory load in duration encoding and reproduction

    Duration estimates are often biased by the sampled statistical context, yielding the classical central-tendency effect: short durations are overestimated and long durations underestimated. Most studies of the central-tendency bias have focused on the integration of the sensory measure and prior information, without considering cognitive limits. Here, we investigated the impact of cognitive (visual working-memory) load on duration estimation at the duration-encoding and reproduction stages. In four experiments, observers had to perform a dual, attention-sharing task: reproducing a given duration (primary) and memorizing a variable set of color patches (secondary). We found that an increase in memory load (i.e., set size) during the duration-encoding stage increased the central-tendency bias while shortening the reproduced duration in general; in contrast, increasing the load during the reproduction stage prolonged the reproduced duration without influencing the central tendency. By integrating an attention-sharing account into a hierarchical Bayesian model, we were able to predict both the general over- and underestimation and the central-tendency effects observed in all four experiments. The model suggests that memory pressure during the encoding stage increases the sensory noise, which elevates the central-tendency effect; memory pressure during the reproduction stage, in contrast, only influences the monitoring of elapsed time, leading to a general over-reproduction without impacting the central tendency. Competing Interest Statement: The authors have declared no competing interest.
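The encoding-stage mechanism, load-inflated sensory noise producing a stronger central tendency, falls directly out of reliability-weighted Bayesian integration. In this sketch the slope of mean reproduction against stimulus duration equals the measurement weight w; the noise levels and prior width are illustrative assumptions, not the fitted hierarchical-model values.

```python
def reproduction_slope(sensory_sd, prior_sd=0.3):
    """Slope of mean reproduction vs. stimulus duration for a Gaussian
    Bayesian observer: estimate = w*m + (1-w)*prior_mean, so the slope
    is the measurement weight w. A slope of 1 is veridical; smaller
    slopes mean a stronger central-tendency bias."""
    return prior_sd ** 2 / (prior_sd ** 2 + sensory_sd ** 2)

slope_low_load = reproduction_slope(sensory_sd=0.10)   # low memory load at encoding
slope_high_load = reproduction_slope(sensory_sd=0.25)  # load inflates encoding noise
assert slope_high_load < slope_low_load  # noisier encoding -> stronger central tendency
```

Reproduction-stage load works differently in the account above: it affects only the monitoring of elapsed time, lengthening reproductions overall without changing this slope.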

    Predictive coding in ASD: inflexible weighting of prediction errors when switching from stable to volatile environments

    Individuals with autism spectrum disorder (ASD) have been widely reported to show atypicalities in predictive coding, though controversy remains as to what causes such atypical processing. Suggestions range from an overestimation of volatility to rigidity in reacting to environmental changes. Here, we tested these two accounts directly using duration reproduction of volatile and non-volatile interval sequences. Critically, both sequences comprised the same set of intervals but differed in their presentation orders. Comparing individuals with ASD to matched controls, we found both groups to respond to the volatility in a similar manner, albeit with a generally reduced prior in the ASD group. Interestingly, though, relative to the control group, the ASD group exhibited a markedly reduced trust in the prior in the volatile session when it was performed after the non-volatile session, while both groups performed comparably in the reverse session order. Our findings suggest that it is not the learning of environmental volatility that is compromised in ASD; rather, it is the response to a change of the volatility regime from stable to volatile that causes a highly inflexible weighting of prediction errors. Competing Interest Statement: The authors have declared no competing interest.

    Acquisition and Use of 'Priors' in Autism: Typical in Deciding Where to Look, Atypical in Deciding What Is There

    Individuals with Autism Spectrum Disorder (ASD) are thought to under-rely on prior knowledge in perceptual decision-making. This study examined whether this applies to decisions of attention allocation, of relevance for 'predictive-coding' accounts of ASD. In a visual search task, a salient but task-irrelevant distractor appeared with higher probability in one display half. Individuals with ASD learned to avoid 'attentional capture' by distractors in the probable region as effectively as control participants, indicating typical priors for deploying attention. However, capture by a 'surprising' distractor at an unlikely location led to greatly slowed identification of a subsequent target at that location, indicating that individuals with ASD attempt to control surprise (unexpected attentional capture) by over-regulating parameters in post-selective decision-making.